@make(letterhead,Phone"497-4330",Who "John McCarthy", Logo Old, Department CSD)
@style[indent 5]
@blankspace ( 8 lines)
@begin(address)
Mr. Christopher T. Cory, Managing Editor
Psychology Today
One Park Avenue,
New York, N.Y. 10016
@end(address)
@greeting(Dear Mr. Cory:)
@begin (body)
The article about me and my views by Philip J. Hilts was
pretty good. However, I wonder if you would be interested in
an article by me on the subject of "Ascribing Mental Qualities
to Machines". The idea is that it is sometimes appropriate to use language
like "believes", "wants", "hopes" and "knows" in referring to
machines like computer programs and even thermostats. Simple
machines are readily described without mental terms, but we are
often in a position where what we know about the state of a
complicated machine is very difficult to express without them.
Even so, mental qualities must be ascribed conservatively according to
the nature of the machine in question, and most machines have
very few beliefs.

The topic is of general interest, should generate some
controversy, and represents one of the areas of collision between
most philosophers and most researchers in artificial intelligence.
No technical knowledge is required to understand what the dispute
is about.

While I have published a paper with the above title, it proved
difficult even for AI people and philosophers, so I have in mind
a fresh and simplified treatment. Illustrations are possible, and
I have already thought of two.
@end(body)
Sincerely,
John McCarthy
Professor of Computer Science